Interpretability of Deep Learning: Estimating importance scores

1000-719bMSB MIM UW, Neo Christopher Chung

In this lab, we estimate importance scores using backpropagation, one of the first XAI methods. There are many names for scores that relate input features to the output class: saliency maps, feature attribution, and importance scores all refer to very closely related, if not identical, approaches.

In the process, we also learn how to use a pre-trained model, called SqueezeNet (AlexNet-level accuracy with 50x fewer parameters and 0.5MB model size), which can be loaded directly from PyTorch.
https://arxiv.org/abs/1602.07360 https://en.wikipedia.org/wiki/SqueezeNet

We further look at ImageNet, one of the most popular and important image databases, consisting of millions of images across roughly 20,000 categories. For Colab, we use only a small portion of ImageNet. https://ieeexplore.ieee.org/document/5206848 https://en.wikipedia.org/wiki/ImageNet

Using these ingredients, we calculate backpropagation-based importance scores from scratch.

Please be mindful of both the original (multi-channel) values and the summarized 2D values. Both are used and researched in practice.
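As a quick illustration of this distinction, here is a minimal sketch (with a random toy tensor standing in for a real gradient) showing how a multi-channel gradient is summarized into a 2D map by taking the maximum absolute value across channels:

```python
import torch

# toy "gradient" for one RGB image: shape (N, C, H, W)
grad = torch.randn(1, 3, 4, 4)

# summarize the 3 channels into a single 2D map by taking the
# maximum absolute value across the channel dimension
saliency_2d, _ = grad.abs().max(dim=1)

print(grad.shape)         # torch.Size([1, 3, 4, 4])
print(saliency_2d.shape)  # torch.Size([1, 4, 4])
```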

Adapted from https://github.com/srinadhu/CS231n/blob/master/assignment3/NetworkVisualization-PyTorch.ipynb

In [5]:
import torch
import torchvision
import torchvision.transforms as T
import random
import numpy as np
import pandas as pd
from scipy.ndimage import gaussian_filter1d
import matplotlib.pyplot as plt
import seaborn as sns

from PIL import Image

from matplotlib import cm
# configuration for visualizing with matplotlib
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

SQUEEZENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
SQUEEZENET_STD  = np.array([0.229, 0.224, 0.225], dtype=np.float32)
In [6]:
# helper functions for image processing
def preprocess(img, size=224):
    transform = T.Compose([
        T.Resize(size),
        T.ToTensor(),
        T.Normalize(mean=SQUEEZENET_MEAN.tolist(),
                    std=SQUEEZENET_STD.tolist()),
        T.Lambda(lambda x: x[None]),
    ])
    return transform(img)

def rescale(x):
    low, high = x.min(), x.max()
    x_rescaled = (x - low) / (high - low)
    return x_rescaled

def deprocess(img, should_rescale=True):
    transform = T.Compose([
        T.Lambda(lambda x: x[0]),
        T.Normalize(mean=[0, 0, 0], std=(1.0 / SQUEEZENET_STD).tolist()),
        T.Normalize(mean=(-SQUEEZENET_MEAN).tolist(), std=[1, 1, 1]),
        T.Lambda(rescale) if should_rescale else T.Lambda(lambda x: x),
        T.ToPILImage(),
    ])
    return transform(img)

def blur_image(X, sigma=1):
    X_np = X.cpu().clone().numpy()
    X_np = gaussian_filter1d(X_np, sigma, axis=2)
    X_np = gaussian_filter1d(X_np, sigma, axis=3)
    X.copy_(torch.Tensor(X_np).type_as(X))
    return X

# load small imagenet data
def load_imagenet_val(num=None):
    f = np.load('imagenet_val_25.npz', allow_pickle=True)
    X = f['X']
    y = f['y']
    class_names = f['label_map'].item()
    idx = np.arange(25)
    np.random.shuffle(idx)
    if num is not None:
        idx = idx[:num]
        X   = X[idx]
        y   = y[idx]
    return X, y, class_names

#X, y, class_names = load_imagenet_val(num=5)

#Load and use all 25 images from a smaller set, downloaded
f = np.load('imagenet_val_25.npz', allow_pickle=True)
X = f['X']
y = f['y']
class_names = f['label_map'].item()
print(X.shape)
print(y.shape)
(25, 224, 224, 3)
(25,)
In [7]:
# check out which number relates to what class names
for y_val in y:
    print(class_names[y_val])
hay
quail
Tibetan mastiff
Border terrier
brown bear, bruin, Ursus arctos
soap dispenser
pajama, pyjama, pj's, jammies
gorilla, Gorilla gorilla
sports car, sport car
toilet tissue, toilet paper, bathroom tissue
stole
lakeside, lakeshore
pirate, pirate ship
bee eater
collie
turnstile
cardoon
Cardigan, Cardigan Welsh corgi
Christmas stocking
space shuttle
daisy
spatula
modem
vase
black swan, Cygnus atratus
In [8]:
# show some images
plt.figure(figsize=(12, 6))
for i in range(5):
    plt.subplot(1, 5, i + 1)
    plt.imshow(X[i])
    plt.title(class_names[y[i]])
    plt.axis('off')
plt.gcf().tight_layout()

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and 0.5MB model size https://arxiv.org/abs/1602.07360

Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).

https://github.com/forresti/SqueezeNet

In [9]:
# Iandola et al, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5MB model size", arXiv 2016
model = torchvision.models.squeezenet1_1(weights=torchvision.models.SqueezeNet1_1_Weights.DEFAULT)
#print(model)

for param in model.parameters():
    param.requires_grad = False
In [10]:
X_tensor = torch.cat([preprocess(Image.fromarray(x)) for x in X], dim=0)
y_tensor = torch.LongTensor(y)
model.eval()
scores = model(X_tensor)
print(scores)
scores_y = scores.gather(1, y_tensor.view(-1, 1)).squeeze()
print(scores_y)
tensor([[ 9.0406,  1.1808,  3.4227,  ...,  4.6864,  8.0145,  5.2129],
        [ 5.9101,  4.6083,  6.9259,  ...,  9.7415,  9.6305,  9.3974],
        [ 1.6097,  4.0396,  4.4560,  ...,  3.4892, 11.6411, 12.5561],
        ...,
        [ 5.5077,  3.8930,  3.3218,  ...,  4.5410,  7.9065, 15.4184],
        [ 7.6427,  8.8772,  4.0593,  ...,  9.6345,  7.5668, 10.8771],
        [ 8.6750, 13.4218, 11.4606,  ...,  6.1399,  5.2605, 10.4970]])
tensor([24.1313, 25.1475, 38.8825, 25.4514, 30.2723, 25.4353, 15.6568, 34.9214,
        22.9094, 13.7762, 18.1419, 10.5448, 23.5066, 46.3714, 39.0091, 27.1299,
        25.8614, 19.7288, 18.6807, 20.9641, 25.2686, 18.7046, 21.7245, 12.6422,
        15.0523])
In [11]:
def compute_saliency_maps(X, y, model):
    """
    Compute a class saliency map using the model for images X and labels y.

    Input:
    - X: Input images; Tensor of shape (N, 3, H, W)
    - y: Labels for X; LongTensor of shape (N,)
    - model: A pretrained CNN that will be used to compute the saliency map.

    Returns:
    - saliency: A Tensor of shape (N, H, W) giving the saliency maps for the input
    images.
    """
    model.eval()
    X.requires_grad_()

    # 1. Forward pass
    scores = model(X)

    # 2. Get correct class scores
    scores = scores.gather(1, y.view(-1, 1)).squeeze()
    print("== class scores ==")
    print(scores)

    # 3. Backward pass
    scores_size = scores.shape
    ones_tensor = torch.ones(scores_size)
    scores.backward(ones_tensor)

    # 4. retrieve the gradient as saliency map
    saliency = X.grad
    return saliency

def compute_abs(saliency):
    saliency_abs = saliency.abs()
    return saliency_abs

def compute_max(saliency):
    saliency_max, _ = torch.max(saliency, dim=1)
    return saliency_max
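The function above calls `scores.backward(ones_tensor)` and then reads the gradient back from `X.grad`. An equivalent pattern uses `torch.autograd.grad`, which returns the gradient directly without mutating `X.grad`. A minimal sketch, with a hypothetical toy linear model standing in for SqueezeNet:

```python
import torch
import torch.nn as nn

# hypothetical toy model standing in for SqueezeNet
toy_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
toy_model.eval()

X = torch.randn(2, 3, 8, 8, requires_grad=True)
y = torch.tensor([1, 7])

# correct-class scores, as in compute_saliency_maps
scores = toy_model(X).gather(1, y.view(-1, 1)).squeeze()

# summing the scores and differentiating is equivalent to
# scores.backward(torch.ones_like(scores)), but the gradient is
# returned directly instead of being accumulated into X.grad
saliency, = torch.autograd.grad(scores.sum(), X)
print(saliency.shape)  # torch.Size([2, 3, 8, 8])
```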
In [12]:
## calculating gradients for CORRECT labels
# Convert X and y from numpy arrays to Torch Tensors
X_tensor = torch.cat([preprocess(Image.fromarray(x)) for x in X], dim=0)
y_tensor = torch.LongTensor(y)

# Compute saliency maps for images in X
saliency = compute_saliency_maps(X_tensor, y_tensor, model)
print(saliency.shape)

# Convert the saliency map from Torch Tensor to numpy array and show images
# and saliency maps together.
#saliency = saliency.numpy()
== class scores ==
tensor([24.1313, 25.1475, 38.8825, 25.4514, 30.2723, 25.4353, 15.6568, 34.9214,
        22.9094, 13.7762, 18.1419, 10.5448, 23.5066, 46.3714, 39.0091, 27.1299,
        25.8614, 19.7288, 18.6807, 20.9641, 25.2686, 18.7046, 21.7245, 12.6422,
        15.0523], grad_fn=<SqueezeBackward0>)
torch.Size([25, 3, 224, 224])
In [13]:
# taking max or max-abs values are typical in the field
saliency_max = compute_max(saliency)
saliency_maxabs = compute_max(compute_abs(saliency))

# show a chosen image and saliency map
i=2

plt.figure(figsize=(6, 3))

plt.subplot(1, 2, 1)
plt.imshow(X[i])
plt.title(class_names[y[i]])
plt.axis('off')

plt.subplot(1, 2, 2)
plt.imshow(saliency_maxabs[i,:,:])
plt.title(class_names[y[i]])
plt.axis('off')

plt.gcf().tight_layout()
In [14]:
# one could make a different color palette (see cmap)
# https://matplotlib.org/stable/users/explain/colors/colormaps.html

# even more control available
# hue_neg, hue_pos = 0, 359
# cmap = sns.diverging_palette(hue_neg, hue_pos, s=100, center="dark", as_cmap=True)

# show a chosen image and saliency map
i=2

plt.figure(figsize=(6, 3))

plt.subplot(1, 2, 1)
plt.imshow(X[i])
plt.title(class_names[y[i]])
plt.axis('off')

plt.subplot(1, 2, 2)
plt.imshow(saliency_maxabs[i,:,:], cmap=plt.cm.hot)
plt.title(class_names[y[i]])
plt.axis('off')

plt.gcf().tight_layout()
In [15]:
# look at the actual values. we call these numbers importance scores
saliency_max[i,:,:].numpy()
Out[15]:
array([[-5.6959433e-04,  1.3885004e-03,  2.3240333e-03, ...,
        -8.9034060e-05,  4.9511982e-05,  0.0000000e+00],
       [ 2.9082103e-03,  6.1952830e-03,  7.6641212e-03, ...,
         4.2653101e-04,  2.0675986e-06,  0.0000000e+00],
       [ 9.0336129e-03,  7.4838907e-03,  6.1066970e-03, ...,
         1.1076747e-04,  3.6587339e-04,  0.0000000e+00],
       ...,
       [-2.7070096e-04,  2.3340620e-03, -1.3610231e-03, ...,
         9.5762481e-04,  9.9336915e-04,  0.0000000e+00],
       [ 4.3300042e-04,  7.3457870e-04,  3.6031934e-03, ...,
         7.6837727e-04,  6.5263757e-04,  0.0000000e+00],
       [ 0.0000000e+00,  0.0000000e+00,  0.0000000e+00, ...,
         0.0000000e+00,  0.0000000e+00,  0.0000000e+00]],
      shape=(224, 224), dtype=float32)
In [16]:
# Plot multiple -- Note that you need to make a figure (5 samples) just like this in the homework, except you use SmoothGrad.
N = 5
for i in range(N):
    plt.subplot(2, N, i + 1)
    plt.imshow(X[i])
    plt.axis('off')
    plt.title(class_names[y[i]])
    plt.subplot(2, N, N + i + 1)
    plt.imshow(saliency_maxabs[i].numpy(), cmap=plt.cm.hot)
    plt.axis('off')
    plt.gcf().set_size_inches(12, 5)
plt.show()
In [17]:
# look at the histogram of the importance scores (raw saliency map values)
plt.hist(saliency.numpy().flatten(), density=True, bins=1000)
plt.xlim([-.2,0.2])
Out[17]:
(-0.2, 0.2)
In [18]:
# look at the histogram of max-abs importance scores
plt.hist(saliency_maxabs.numpy().flatten(), density=True, bins=1000)
plt.xlim([0,0.25])
Out[18]:
(0.0, 0.25)

SmoothGrad

Smilkov et al. (2017) "SmoothGrad: removing noise by adding noise". The core idea is to take an image of interest, sample similar images by adding noise to it, and then average the resulting sensitivity (saliency) maps over the sampled images.

Let's start building SmoothGrad.
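As a preview of the idea, here is a minimal sketch that adds Gaussian noise directly to the input tensor and averages the gradients (the notebook below instead adds noise in pixel space via `add_noise` before preprocessing); the toy linear model is a hypothetical stand-in for SqueezeNet:

```python
import torch
import torch.nn as nn

def smoothgrad_sketch(model, x, y, n=10, sigma=0.1):
    """Average the gradient of the class-y score over n noisy copies of x."""
    acc = torch.zeros_like(x)
    for _ in range(n):
        # sample a similar image by adding Gaussian noise
        noisy = (x + sigma * torch.randn_like(x)).requires_grad_()
        score = model(noisy)[0, y]
        grad, = torch.autograd.grad(score, noisy)
        acc += grad
    return acc / n

# hypothetical toy model standing in for SqueezeNet
toy_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
toy_model.eval()

x = torch.randn(1, 3, 8, 8)
avg_grad = smoothgrad_sketch(toy_model, x, y=3)
print(avg_grad.shape)  # torch.Size([1, 3, 8, 8])
```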

In [19]:
# function to add a noise to an image
def add_noise(x, noise_pct=0.05):
    # Calculate the noise level
    noise_level = noise_pct * np.std(x)
    noise = np.random.normal(0, noise_level, size=x.shape)

    # Add the noise to the sample
    noisy_sample = x + noise

    # Clip the values to ensure they remain within the valid range (0-255 for uint8 images)
    noisy_sample = np.clip(noisy_sample, 0, 255).astype(np.uint8)

    return noisy_sample

# Example
i = 2
sample = X[i]
print(np.std(sample))

noisy_sample = add_noise(x=sample, noise_pct=0.5)

# Visualize the noise-added sample
plt.figure(figsize=(3, 3))
plt.imshow(noisy_sample)
plt.title(class_names[y[i]] + " + noise")
plt.axis('off')
86.86013196652917
Out[19]:
(np.float64(-0.5), np.float64(223.5), np.float64(223.5), np.float64(-0.5))
In [20]:
y_tensor
Out[20]:
tensor([958,  85, 244, 182, 294, 804, 697, 366, 817, 999, 824, 975, 724,  92,
        231, 877, 946, 264, 496, 812, 985, 813, 662, 883, 100])
In [21]:
# Compute saliency map from a noisy image

# the noisy numpy array must be converted back to a PIL image and run through the same preprocess function
noisy_sample_tensor = preprocess(Image.fromarray(noisy_sample))

saliency = compute_saliency_maps(noisy_sample_tensor, y_tensor[i].unsqueeze(0), model)
saliency_max = compute_max(saliency)
saliency_maxabs = compute_max(compute_abs(saliency))

plt.figure(figsize=(6, 3))

plt.subplot(1, 2, 1)
plt.imshow(X[i])
plt.title(class_names[y[i]])
plt.axis('off')

plt.subplot(1, 2, 2)
plt.imshow(saliency_maxabs[0,:,:].detach().numpy(), cmap=plt.cm.hot)
plt.title(class_names[y[i]])
plt.axis('off')

plt.gcf().tight_layout()
== class scores ==
tensor(6.5884, grad_fn=<SqueezeBackward0>)

Homework

Make a function to create SmoothGrad, where the input arguments are X, y, model, n, and noise_pct. For simplicity, we only consider max-abs values. The steps for the SmoothGrad function in detail:

  1. Use add_noise to add noise (controlled by noise_pct) to a sample.
  2. Pass the noisy sample through compute_saliency_maps and save saliency_maxabs. Repeat this process n times.
  3. Take and return the average of the n saliency_maxabs arrays.

Visualize the first five images and their SmoothGrad heatmaps. Compare them with the figure above using (vanilla) saliency maps.

Please submit the notebook and the PDF/PNG image of these five images and their SmoothGrad heatmaps.

Note: also experiment with increasing the noise level.

In [22]:
class SmoothGrad:
    def __init__(self, model, n_processed=50, noise_pct=0.05):
        self.model = model
        self.n_processed = n_processed
        self.noise_pct = noise_pct

    def __call__(self, X, y):
        smoothed_saliency = None
        
        for _ in range(self.n_processed):
            noisy_X = add_noise(X, noise_pct=self.noise_pct)
            noisy_X_tensor = preprocess(Image.fromarray(noisy_X))
            
            noisy_saliency = compute_saliency_maps(noisy_X_tensor, y, self.model)
            noisy_saliency_maxabs = compute_max(compute_abs(noisy_saliency))
            
            if smoothed_saliency is None:
                smoothed_saliency = torch.zeros_like(noisy_saliency_maxabs)
            
            # Accumulate the saliency maps
            smoothed_saliency += noisy_saliency_maxabs

        # Take and return the average of the n saliency_maxabs arrays
        smoothed_saliency /= self.n_processed

        return smoothed_saliency
In [30]:
# Visualize the first five images and their SmoothGrad heatmaps
def run_smoothgrad(model, X, y, N_img, n_iter, noise_pct):
    smoothgrad = SmoothGrad(model=model, n_processed=n_iter, noise_pct=noise_pct)
    y_tensor = torch.LongTensor(y)

    N = N_img
    plt.figure(figsize=(12, 5))
    for i in range(N):
        smoothed_saliency = smoothgrad(X[i], y_tensor[i].unsqueeze(0))
        smoothed_saliency_np = smoothed_saliency[0].detach().numpy()

        # original image
        plt.subplot(2, N, i + 1)
        plt.imshow(X[i])
        plt.axis('off')
        plt.title(class_names[y[i]])

        # SmoothGrad heatmap
        plt.subplot(2, N, N + i + 1)
        plt.imshow(smoothed_saliency_np, cmap=plt.cm.hot)
        plt.axis('off')

    plt.suptitle(f"SmoothGrad visualization with {noise_pct:.0%} noise", y=1.05)
    plt.tight_layout()
    plt.show()
In [31]:
run_smoothgrad(model, X, y, N_img=5, n_iter=50, noise_pct=0.1)
== class scores ==
tensor(24.2508, grad_fn=<SqueezeBackward0>)
(... 249 further per-iteration printouts omitted: 50 noisy passes for each of the 5 images ...)
InΒ [32]:
# and now adding more noise
run_smoothgrad(model, X, y, N_img=5, n_iter=50, noise_pct=0.5)
[output truncated: one "== class scores ==" / tensor(…, grad_fn=<SqueezeBackward0>) pair per noisy sample, 50 iterations for each of the 5 images; scores at noise_pct=0.5 fall roughly between 5 and 15]
[output image: SmoothGrad saliency maps for the five images at noise_pct=0.5]
InΒ [33]:
# and now adding even more noise
run_smoothgrad(model, X, y, N_img=5, n_iter=50, noise_pct=0.95)
[output truncated: one "== class scores ==" / tensor(…, grad_fn=<SqueezeBackward0>) pair per noisy sample, 50 iterations for each of the 5 images; scores at noise_pct=0.95 fall roughly between 1 and 10]
[output image: SmoothGrad saliency maps for the five images at noise_pct=0.95]

ConclusionsΒΆ

This lab demonstrated how to estimate feature importance scores for deep learning models, showing how individual input pixels affect the model's output for a given class. Saliency maps built from raw gradients appear noisy and highlight scattered pixels.
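The raw-gradient saliency computation can be sketched as follows. This is a minimal illustration, not the lab's exact code: the tiny linear model is a stand-in for SqueezeNet, and the helper name `saliency_map` is hypothetical.

```python
import torch
import torch.nn as nn

def saliency_map(model, x, y):
    """Raw-gradient saliency: |d(class score)/d(input)|, collapsed to a 2D map."""
    model.eval()
    x = x.clone().requires_grad_(True)   # track gradients w.r.t. the input
    score = model(x)[0, y]               # unnormalized score of the target class
    score.backward()                     # backpropagate the class score
    # take |gradient|, drop the batch dim, and max over channels -> (H, W)
    return x.grad.abs().squeeze(0).max(dim=0).values

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))  # stand-in model
x = torch.rand(1, 3, 8, 8)               # one small RGB "image"
sal = saliency_map(model, x, y=3)
print(sal.shape)  # torch.Size([8, 8])
```

Note the channel collapse with `max` in the last step: this is the "summarized 2D" form mentioned earlier, as opposed to keeping the original multi-channel gradients.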

SmoothGrad, an enchanced version of this idea, reduces noise in saliency maps by averaging multiple gradient calculations on noisy versions of the input image. Noise percentage from 0.1 to 0.5 produce interpretable results, but increasing it to 0.95 distort important features, as on example saliency map of hay, on which the whole background is selected as important feature.